Treating Epilepsy by Reinforcement Learning Via Manifold-Based Simulation
Authors
Abstract
The ability to take intelligent actions in real-world domains is a goal of great interest in the machine learning community. Unfortunately, the real world is filled with systems that can be partially observed but cannot, as yet, be described by first-principles models. Moreover, the traditional paradigm of direct interaction with the environment used in reinforcement learning is often prohibitively expensive in practice. An alternative approach solves both of these problems simultaneously by substituting simulated interaction with the environment for real-world experience. The simulation in this approach is a computational model of a dynamical system. The barrier to linking intelligent control with real-world domains is therefore one of identifying high-quality state-spaces and transition functions from observations. From a dynamical systems perspective, this barrier is analogous to the problem of finding high-quality manifold embeddings, and a rich literature of theory and practice exists to address it. The contribution of this work is two-fold. First, we describe an approach for learning optimal control strategies directly from observations, using manifold embeddings as the intermediate state representation. Second, we demonstrate how control strategies constructed in this way can answer important scientific questions. As a concrete example, we use our approach to guide experimental decisions in neurostimulation treatments of epilepsy.
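As a rough illustration of the idea in the abstract, the sketch below builds a delay-coordinate (Takens-style) manifold embedding from a scalar observation sequence and uses it as the state for a crude data-driven transition model. The embedding dimension, lag, and nearest-neighbour "simulator" are illustrative assumptions, not the parameters or algorithm from the paper.

```python
import numpy as np

def delay_embed(signal, dim=3, lag=5):
    """Map a 1-D observation series onto points in a dim-dimensional
    reconstructed state space using delay coordinates."""
    n = len(signal) - (dim - 1) * lag
    return np.column_stack([signal[i * lag : i * lag + n] for i in range(dim)])

def nearest_neighbor_step(states, query):
    """Simulated transition: jump to the successor of the closest
    previously observed embedded state."""
    dists = np.linalg.norm(states[:-1] - query, axis=1)
    return states[np.argmin(dists) + 1]

# Toy observation stream standing in for a recorded field-potential trace.
t = np.linspace(0, 20 * np.pi, 2000)
obs = np.sin(t) + 0.05 * np.random.randn(t.size)

embedded = delay_embed(obs, dim=3, lag=5)
next_state = nearest_neighbor_step(embedded, embedded[100])
```

A transition model of this kind can then stand in for the real environment, so that a control policy is learned from simulated rollouts rather than costly real-world interaction.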
Similar articles
Manifold Embeddings for Model-Based Reinforcement Learning of Neurostimulation Policies
Real-world reinforcement learning problems often exhibit nonlinear, continuous-valued, noisy, partially-observable state-spaces that are prohibitively expensive to explore. The formal reinforcement learning framework, unfortunately, has not been successfully demonstrated in a real-world domain having all of these constraints. We approach this domain with a two-part solution. First, we overcome ...
Dynamic Obstacle Avoidance by Distributed Algorithm based on Reinforcement Learning (RESEARCH NOTE)
In this paper we focus on the application of reinforcement learning to obstacle avoidance in dynamic environments in wireless sensor networks. A distributed algorithm based on reinforcement learning is developed for sensor networks to guide a mobile robot through dynamic obstacles. The sensor network models the danger of the area under coverage as obstacles, and has the property of adoption o...
Treating Epilepsy via Adaptive Neurostimulation: a Reinforcement Learning Approach
This paper presents a new methodology for automatically learning an optimal neurostimulation strategy for the treatment of epilepsy. The technical challenge is to automatically modulate neurostimulation parameters, as a function of the observed EEG signal, so as to minimize the frequency and duration of seizures. The methodology leverages recent techniques from the machine learning literature, ...
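To make the setup described in this snippet concrete, the sketch below shows one way such a control loop could look: a stimulation action is chosen epsilon-greedily and the agent is rewarded for seizure-free time while being mildly penalized for stimulating. The action set, reward weights, and function names are hypothetical placeholders, not the method or parameters from the cited paper.

```python
import numpy as np

stim_frequencies_hz = [0.0, 0.5, 1.0, 2.0]   # assumed action set; 0.0 = no stimulation

def reward(seizing: bool, stim_hz: float) -> float:
    """Penalize time spent in seizure and, more weakly, the stimulation applied."""
    return (-1.0 if seizing else 0.0) - 0.05 * stim_hz

def epsilon_greedy(q_row: np.ndarray, eps: float = 0.1) -> int:
    """Pick a stimulation action from the Q-values of the current EEG-derived state."""
    if np.random.rand() < eps:
        return int(np.random.randint(len(q_row)))
    return int(np.argmax(q_row))
```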
Manifold-based non-parametric learning of action-value functions
Finding good approximations to state-action value functions is a central problem in model-free on-line reinforcement learning. The use of non-parametric function approximators enables us to simultaneously represent model and confidence. Since Q functions are usually discontinuous, we present a novel Gaussian process (GP) kernel function to cope with discontinuity. We use a manifold-based distan...
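The snippet above combines a GP value approximator with a manifold-based distance. As a minimal sketch of how a non-Euclidean distance could be plugged into a GP regression of Q-values, the code below builds a squared-exponential kernel from a supplied distance matrix (e.g. geodesic or graph distances). The kernel form and length scale are assumptions for illustration; the paper's actual kernel is designed to handle discontinuities and is not reproduced here.

```python
import numpy as np

def kernel_from_distances(D, length_scale=1.0, signal_var=1.0):
    # Note: for arbitrary non-Euclidean distances this matrix is not
    # guaranteed to be positive semi-definite; a real implementation
    # must check or correct for this.
    return signal_var * np.exp(-0.5 * (D / length_scale) ** 2)

def gp_posterior_mean(D_train, D_cross, q_train, noise_var=1e-2):
    """Posterior mean of Q at test points, given train-train distances D_train,
    test-train distances D_cross, and observed Q-values q_train."""
    K = kernel_from_distances(D_train)
    K_star = kernel_from_distances(D_cross)
    alpha = np.linalg.solve(K + noise_var * np.eye(len(q_train)), q_train)
    return K_star @ alpha
```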
Reinforcement learning based feedback control of tumor growth by limiting maximum chemo-drug dose using fuzzy logic
In this paper, a model-free reinforcement learning-based controller is designed to extract a treatment protocol because the design of a model-based controller is complex due to the highly nonlinear dynamics of cancer. The Q-learning algorithm is used to develop an optimal controller for cancer chemotherapy drug dosing. In the Q-learning algorithm, each entry of the Q-table is updated using data...
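As a rough illustration of the tabular Q-learning update mentioned in this snippet, the sketch below restricts the action set to doses under a maximum cap before performing the standard backup. The states, dose levels, learning parameters, and the fixed cap are hypothetical placeholders; in the cited paper the dose limit is handled with fuzzy logic rather than a hard threshold.

```python
import numpy as np

n_states = 10
doses = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # normalized dose levels (assumed)
Q = np.zeros((n_states, len(doses)))
alpha, gamma, max_dose = 0.1, 0.95, 0.75

allowed = np.flatnonzero(doses <= max_dose)       # actions respecting the dose cap

def q_update(s: int, a: int, r: float, s_next: int) -> None:
    """Q-learning backup; actions (including a) are assumed drawn from `allowed`."""
    best_next = Q[s_next, allowed].max()
    Q[s, a] += alpha * (r + gamma * best_next - Q[s, a])
```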